In recent years, the popularity of deep learning (DL) methods has increased dramatically, and their application to supervised learning problems in the biomedical sciences has grown substantially. However, the high prevalence and complexity of missing data in modern biomedical datasets pose significant challenges for DL methods. Here, we present a formal treatment of missing data in the context of deeply learned generalized linear models, a supervised DL architecture for regression and classification problems. We propose a new architecture, \textit{dlglm}, one of the first capable of flexibly accounting for both ignorable and non-ignorable patterns of missingness in the input features and the response at training time. We demonstrate through statistical simulation that our method outperforms existing approaches to supervised learning tasks in the presence of missing not at random (MNAR) missingness. We present a case study on the Bank Marketing dataset from the UCI Machine Learning Repository, in which we predict whether clients subscribed to a product based on phone survey data.
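The simulation setting referenced above hinges on missingness whose probability depends on the unobserved values themselves. The short Python sketch below illustrates such an MNAR mechanism on synthetic data; the coefficients and the logistic form of the missingness model are illustrative assumptions, not the paper's simulation design.

```python
# Generate synthetic supervised-learning data, then mask entries MNAR:
# the probability that a value is missing depends on the value itself.
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 5

X = rng.normal(size=(n, p))                                      # fully observed "truth"
beta = rng.normal(size=p)
y = (X @ beta + rng.normal(scale=0.5, size=n) > 0).astype(int)   # binary response

# MNAR mechanism (assumed form): larger values are more likely to be missing.
miss_prob = 1.0 / (1.0 + np.exp(-(1.5 * X - 1.0)))
mask = rng.uniform(size=X.shape) < miss_prob                     # True where missing

X_obs = X.copy()
X_obs[mask] = np.nan
print(f"overall missingness rate: {mask.mean():.2%}")
```

Methods that ignore the dependence of `mask` on the unobserved values (for example, mean imputation followed by a standard classifier) can be biased in this regime, which is the gap the abstract targets.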
Modern high-throughput single-cell immune profiling technologies, such as flow cytometry, mass cytometry, and single-cell RNA sequencing, can readily measure the expression of a large number of protein or gene features across millions of cells in multi-patient cohorts. While bioinformatics approaches are available to link immune cell heterogeneity to external variables of interest, such as clinical outcomes or experimental labels, they often struggle to accommodate such a large number of profiled cells. To ease this computational burden, a limited number of cells are typically \emph{sketched}, or subsampled, from each patient. However, existing sketching approaches either fail to adequately subsample cells from rare cell populations or fail to preserve the true frequencies of particular immune cell types. Here, we propose a novel sketching approach based on kernel herding that selects a limited subsample of all cells while preserving the underlying frequencies of immune cell types. We tested our approach on three flow and mass cytometry datasets and one single-cell RNA sequencing dataset, and demonstrate that the sketched cells (1) more accurately represent the overall cellular landscape and (2) facilitate improved performance in downstream analysis tasks, such as classifying patients according to their clinical outcomes. An implementation of sketching with kernel herding is publicly available at \url{https://github.com/vishalathreya/set-summarization}.
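For context on the approach named above, the following is a minimal, illustrative re-implementation of greedy kernel herding for subsampling; it is not the released set-summarization code, and the RBF kernel, bandwidth, and duplicate masking are assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def kernel_herding(X, num_samples, gamma=1.0):
    """Greedily pick rows of X whose kernel mean tracks that of the full set."""
    K = rbf_kernel(X, X, gamma=gamma)      # pairwise kernel matrix
    mean_embedding = K.mean(axis=1)        # k(x_i, .) averaged over all cells
    selected, running_sum = [], np.zeros(X.shape[0])
    for t in range(num_samples):
        scores = mean_embedding - running_sum / (t + 1)
        scores[selected] = -np.inf         # keep the subsample free of duplicates
        idx = int(np.argmax(scores))
        selected.append(idx)
        running_sum += K[:, idx]           # sum of k(., x_s) over chosen cells
    return np.array(selected)

# Toy usage: sketch 50 "cells" out of 2,000 synthetic ones with 10 markers each.
rng = np.random.default_rng(0)
cells = rng.normal(size=(2000, 10))
sketch = cells[kernel_herding(cells, num_samples=50, gamma=0.1)]
print(sketch.shape)  # (50, 10)
```

Because each greedy pick maximizes agreement with the full set's kernel mean embedding, the subsample tends to preserve the relative frequencies of cell populations rather than only covering dense regions.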
In recent years, the popularity of deep learning (DL) methods has increased dramatically. While their initial successes were demonstrated in classifying and manipulating image data, the application of DL methods to problems in the biomedical sciences has grown substantially. However, the high prevalence and complexity of missing data in biomedical datasets pose significant challenges for DL methods. Here, we provide a formal treatment of missing data in the context of variational autoencoders (VAEs), a popular unsupervised DL architecture commonly used for dimension reduction, imputation, and learning latent representations of complex data. We propose a new VAE architecture, NIMIWAE, one of the first to flexibly account for both ignorable and non-ignorable patterns of missingness in the input features at training time. After training, samples can be drawn from the posterior distribution of the missing data and used for multiple imputation, facilitating downstream analyses of high-dimensional incomplete datasets. We demonstrate through statistical simulation that our method outperforms existing approaches on unsupervised learning tasks and in imputation accuracy. We conclude with a case study of an EHR dataset pertaining to 12,000 ICU patients with a large number of diagnostic measurements and clinical outcomes, in which many features are only partially observed.
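A common ingredient in VAE-based handling of incomplete inputs is to condition on the missingness mask and restrict the reconstruction term to observed entries; multiple imputations are then obtained by repeated posterior sampling. The PyTorch sketch below shows only that generic idea; it is not the NIMIWAE architecture, and the layer sizes, Gaussian reconstruction term, and zero-filling are assumptions.

```python
import torch
import torch.nn as nn

class MaskedVAE(nn.Module):
    """Toy mask-conditioned VAE with an observed-entries-only reconstruction loss."""
    def __init__(self, d_in=20, d_latent=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(2 * d_in, 64), nn.ReLU(), nn.Linear(64, 2 * d_latent))
        self.dec = nn.Sequential(nn.Linear(d_latent, 64), nn.ReLU(), nn.Linear(64, d_in))

    def forward(self, x, mask):
        x = torch.nan_to_num(x)                         # zero-fill missing entries
        h = torch.cat([x * mask, mask], dim=1)          # condition on the missingness mask
        mu, logvar = self.enc(h).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        x_hat = self.dec(z)
        recon = (((x_hat - x) ** 2) * mask).sum(dim=1)  # observed entries only
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(dim=1)
        return (recon + kl).mean(), x_hat

# Toy usage: mask is 1 where a feature is observed, 0 where it is missing.
x = torch.randn(16, 20)
mask = (torch.rand(16, 20) > 0.3).float()
x[mask == 0] = float("nan")
loss, x_hat = MaskedVAE()(x, mask)
loss.backward()
# After training, repeatedly sampling z (and hence x_hat) for an incomplete row,
# while keeping observed values fixed, yields multiple imputations.
```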
Most conventional text-to-video retrieval systems operate in a static setting: beyond the initial text query provided by the user, there is no interaction between the user and the agent. This can be suboptimal if the initial query is ambiguous, leading to many incorrectly retrieved videos. To overcome this limitation, we propose a novel framework for Video Retrieval using Dialog (ViReD), which enables the user to interact with an AI agent over multiple rounds of dialog, refining the retrieval results by answering questions generated by the agent. Our novel multimodal question generator learns to ask questions that maximize subsequent video retrieval performance, using (i) the video candidates retrieved in the last round of interaction with the user and (ii) the text-based dialog history of all previous interactions, to generate questions that incorporate visual and linguistic cues relevant to video retrieval. Furthermore, to generate maximally informative questions, we propose Information-Guided Supervision (IGS), which guides the question generator to ask questions that improve subsequent video retrieval accuracy. We validate the effectiveness of our interactive ViReD framework on the AVSD dataset, showing that our interactive method performs significantly better than traditional non-interactive video retrieval systems. We also demonstrate that our proposed approach generalizes to real-world settings involving interactions with real humans, demonstrating the robustness and generality of our framework.
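To make the interaction protocol concrete, here is an illustrative sketch of a dialog-augmented retrieval loop in the spirit of the framework described above; it is not the authors' ViReD code, and the retrieval, question-generation, and answering functions are hypothetical stand-ins.

```python
from typing import Callable, List

def interactive_retrieval(
    query: str,
    retrieve: Callable[[str], List[str]],            # text -> ranked video ids
    ask_question: Callable[[str, List[str]], str],   # dialog history + candidates -> question
    get_answer: Callable[[str], str],                # question -> user's answer
    num_rounds: int = 3,
) -> List[str]:
    dialog = query
    candidates = retrieve(dialog)
    for _ in range(num_rounds):
        question = ask_question(dialog, candidates)      # conditioned on history and top candidates
        answer = get_answer(question)
        dialog = f"{dialog} Q: {question} A: {answer}"   # grow the text-based dialog history
        candidates = retrieve(dialog)                    # re-rank with the enriched query
    return candidates

# Toy usage with dummy components, just to show the control flow.
videos = {"v1": "a dog playing in a park", "v2": "a cat on a couch"}
ranked = interactive_retrieval(
    "an animal video",
    retrieve=lambda text: sorted(videos, key=lambda v: -sum(w in videos[v] for w in text.lower().split())),
    ask_question=lambda hist, cands: "Is the animal a dog?",
    get_answer=lambda q: "yes, a dog",
)
print(ranked)
```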
Modern single-cell flow and mass cytometry technologies measure the expression of several proteins in individual cells within a blood or tissue sample. Each profiled biological sample is therefore represented by hundreds of thousands of multidimensional cell feature vectors, which incurs a high computational cost when predicting each biological sample's associated phenotype with machine learning models. Such a large set cardinality also limits the interpretability of machine learning models, since it is difficult to track how each individual cell influences the final prediction. We propose using kernel mean embedding to encode the cellular landscape of each profiled biological sample. Although our foremost goal is to build a more transparent model, we find that our method achieves comparable or better accuracy using a simple linear classifier. As a result, our model contains very few parameters yet performs comparably to deep learning models with millions of parameters. In contrast to deep learning approaches, the linearity and sub-selection step of our model make it easy to interpret the classification results. Further analysis shows that our method admits rich biological interpretability for linking cellular heterogeneity to clinical phenotypes.
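The general recipe described above can be approximated, for intuition, by mapping each cell with random Fourier features, averaging them per sample (an approximate kernel mean embedding), and fitting a linear classifier on the resulting sample-level vectors. The sketch below is such an approximation on toy data with an assumed kernel bandwidth, not the authors' implementation.

```python
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: 40 "samples", each a set of 500 cells with 10 protein markers.
samples = [rng.normal(loc=rng.normal(size=10) * 0.3, size=(500, 10)) for _ in range(40)]
labels = rng.integers(0, 2, size=40)

rff = RBFSampler(gamma=0.5, n_components=256, random_state=0)
rff.fit(np.vstack(samples))                       # fit the random feature map once

# Kernel mean embedding: average the feature map over each sample's cells.
embeddings = np.stack([rff.transform(cells).mean(axis=0) for cells in samples])

clf = LogisticRegression(max_iter=1000).fit(embeddings, labels)
print("training accuracy:", clf.score(embeddings, labels))
# Because the classifier is linear in the embedding, a cell's contribution to a
# sample-level prediction can be read off as the dot product of its feature map
# with the learned weights.
```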
Three main points: 1. Data Science (DS) will be increasingly important to heliophysics; 2. Methods of heliophysics science discovery will continually evolve, requiring the use of learning technologies [e.g., machine learning (ML)] that are applied rigorously and that are capable of supporting discovery; and 3. To grow with the pace of data, technology, and workforce changes, heliophysics requires a new approach to the representation of knowledge.
Image classification with small datasets has been an active research area in the recent past. However, as research in this scope is still in its infancy, two key ingredients are missing for ensuring reliable and truthful progress: a systematic and extensive overview of the state of the art, and a common benchmark to allow for objective comparisons between published methods. This article addresses both issues. First, we systematically organize and connect past studies to consolidate a community that is currently fragmented and scattered. Second, we propose a common benchmark that allows for an objective comparison of approaches. It consists of five datasets spanning various domains (e.g., natural images, medical imagery, satellite data) and data types (RGB, grayscale, multispectral). We use this benchmark to re-evaluate the standard cross-entropy baseline and ten existing methods published between 2017 and 2021 at renowned venues. Surprisingly, we find that thorough hyper-parameter tuning on held-out validation data results in a highly competitive baseline and highlights a stunted growth of performance over the years. Indeed, only a single specialized method dating back to 2019 clearly wins our benchmark and outperforms the baseline classifier.
Dataset scaling, also known as normalization, is an essential preprocessing step in a machine learning pipeline. It is aimed at adjusting attribute scales so that they all vary within the same range. This transformation is known to improve the performance of classification models, but there are several scaling techniques to choose from, and the choice is generally not made carefully. In this paper, we execute a broad experiment comparing the impact of 5 scaling techniques on the performance of 20 classification algorithms, spanning monolithic and ensemble models, applying them to 82 publicly available datasets with varying imbalance ratios. Results show that the choice of scaling technique matters for classification performance, and that the performance difference between the best and the worst scaling technique is relevant and statistically significant in most cases. They also indicate that choosing an inadequate technique can be more detrimental to classification performance than not scaling the data at all. We also show how the performance variation of an ensemble model, across different scaling techniques, tends to be dictated by that of its base model. Finally, we discuss the relationship between a model's sensitivity to the choice of scaling technique and its performance, and provide insights into its applicability in different model deployment scenarios. Full results and source code for the experiments in this paper are available in a GitHub repository.\footnote{https://github.com/amorimlb/scaling\_matters}
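To make the effect concrete, here is a small, hedged illustration comparing a few scikit-learn scalers with one scale-sensitive classifier; it is not the paper's experimental protocol, and the dataset and model are arbitrary stand-ins.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, MinMaxScaler, RobustScaler, MaxAbsScaler
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
scalers = {
    "none": None,
    "standard": StandardScaler(),
    "min-max": MinMaxScaler(),
    "robust": RobustScaler(),
    "max-abs": MaxAbsScaler(),
}
for name, scaler in scalers.items():
    steps = [scaler, KNeighborsClassifier()] if scaler is not None else [KNeighborsClassifier()]
    score = cross_val_score(make_pipeline(*steps), X, y, cv=5).mean()  # 5-fold CV accuracy
    print(f"{name:>9}: {score:.3f}")
```

On distance-based models such as k-NN, the unscaled run typically lags the scaled ones, which mirrors the paper's broader point that the scaling choice is not innocuous.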
The devastation caused by the coronavirus pandemic makes it imperative to design automated techniques for a fast and accurate detection. We propose a novel non-invasive tool, using deep learning and imaging, for delineating COVID-19 infection in lungs. The Ensembling Attention-based Multi-scaled Convolution network (EAMC), employing Leave-One-Patient-Out (LOPO) training, exhibits high sensitivity and precision in outlining infected regions along with assessment of severity. The Attention module combines contextual with local information, at multiple scales, for accurate segmentation. Ensemble learning integrates heterogeneity of decision through different base classifiers. The superiority of EAMC, even with severe class imbalance, is established through comparison with existing state-of-the-art learning models over four publicly-available COVID-19 datasets. The results are suggestive of the relevance of deep learning in providing assistive intelligence to medical practitioners, when they are overburdened with patients as in pandemics. Its clinical significance lies in its unprecedented scope in providing low-cost decision-making for patients lacking specialized healthcare at remote locations.
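Leave-One-Patient-Out training, as named above, holds out all data from one patient per fold so that no patient contributes to both training and testing. The sketch below shows only that evaluation scheme, using scikit-learn's LeaveOneGroupOut on made-up features; it is not the EAMC network or its COVID-19 data.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 16))            # stand-in per-slice image features
y = rng.integers(0, 2, size=120)          # infected / not infected
patients = np.repeat(np.arange(10), 12)   # 10 patients, 12 slices each

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=patients):
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[test_idx], y[test_idx]))   # evaluated on the held-out patient only

print(f"mean LOPO accuracy over {len(scores)} patients: {np.mean(scores):.3f}")
```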
Objective: Imbalances of the electrolyte concentration levels in the body can lead to catastrophic consequences, but accurate and accessible measurements could improve patient outcomes. While blood tests provide accurate measurements, they are invasive and the laboratory analysis can be slow or inaccessible. In contrast, an electrocardiogram (ECG) is a widely adopted tool which is quick and simple to acquire. However, the problem of estimating continuous electrolyte concentrations directly from ECGs is not well-studied. We therefore investigate if regression methods can be used for accurate ECG-based prediction of electrolyte concentrations. Methods: We explore the use of deep neural networks (DNNs) for this task. We analyze the regression performance across four electrolytes, utilizing a novel dataset containing over 290000 ECGs. For improved understanding, we also study the full spectrum from continuous predictions to binary classification of extreme concentration levels. To enhance clinical usefulness, we finally extend to a probabilistic regression approach and evaluate different uncertainty estimates. Results: We find that the performance varies significantly between different electrolytes, which is clinically justified in the interplay of electrolytes and their manifestation in the ECG. We also compare the regression accuracy with that of traditional machine learning models, demonstrating superior performance of DNNs. Conclusion: Discretization can lead to good classification performance, but does not help solve the original problem of predicting continuous concentration levels. While probabilistic regression demonstrates potential practical usefulness, the uncertainty estimates are not particularly well-calibrated. Significance: Our study is a first step towards accurate and reliable ECG-based prediction of electrolyte concentration levels.
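The probabilistic regression extension mentioned above can be illustrated by a network that outputs a mean and a variance per input and is trained with the Gaussian negative log-likelihood. The PyTorch sketch below uses random stand-in features and an assumed architecture; it is not the authors' ECG model.

```python
import torch
import torch.nn as nn

class GaussianRegressor(nn.Module):
    """Predicts a mean and a log-variance for each input."""
    def __init__(self, d_in=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(d_in, 128), nn.ReLU(), nn.Linear(128, 2))

    def forward(self, x):
        mean, log_var = self.backbone(x).chunk(2, dim=1)
        return mean.squeeze(1), log_var.squeeze(1)

def gaussian_nll(mean, log_var, target):
    # 0.5 * [log sigma^2 + (y - mu)^2 / sigma^2], up to an additive constant
    return (0.5 * (log_var + (target - mean) ** 2 / log_var.exp())).mean()

# Toy training step on random "ECG feature" vectors and concentration targets.
x = torch.randn(32, 64)
y = torch.randn(32)
model = GaussianRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mean, log_var = model(x)
loss = gaussian_nll(mean, log_var, y)
loss.backward()
opt.step()
print(float(loss))  # the predictive variance exp(log_var) serves as an uncertainty estimate
```

Calibration of such uncertainty estimates is a separate question, which is the caveat the abstract raises.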